The Polygraph Place



  Polygraph Place Bulletin Board
  Professional Issues - Private Forum for Examiners ONLY
  Doubling up on CQs and then alternating as needed

Author Topic:   Doubling up on CQs and then alternating as needed
Dan Mangan
Member
posted 12-28-2012 01:57 PM
Sometimes, even with truthful subjects, one or more CQs will tend to fall flat.

I'd like to know how y'all feel about reviewing twice as many CQs as you need during the pre-test, and building them into the PF question template as ready-to-go alternates.

It's not terribly rare to change CQs between charts if a subject is not productively reacting one way or the other. But how do you feel about having such flexibility already built in?

In other words, is there anything about using alternate CQs "on the fly" that would be problematic?


Ted Todd
Member
posted 12-29-2012 05:32 PM
Dan,
You bring up a great point and one that perhaps has some merit. I am curious about how you solidify or otherwise lock in your comparison questions. Given your stance on “open book”, no gimmicks or trickery and no magic pixie dust, this seems hard to do. Do you use DLCs exclusively? Again Dan, not a dig, just curious. And yes, I did fetch the Chief a Danish and coffee this morning. She pays me well for my efforts!

Happy New Year to you Dan and to all of the great minds (minus myself and Sackett) on this site!

Ted




Dan Mangan
Member
posted 12-29-2012 07:15 PM
Ted,

C'mon Ted, you know darn well that "pixie dust" is a dog whistle for inciting the non-believers!

But seriously, folks...

I shy away from DLCs and gravitate towards situational or futuristic CQs.

Here are two examples (for an infidelity case):

Would anyone who knows you well describe you as someone that your wife should not trust?

In the future, would you have sex with anyone besides your spouse if you could get away with it?

I find it easier to get buy-in on these kinds of questions, so, yes, the subject is committed to all the CQs -- including the alternates -- during the pre-test.

Since it isn't too hard to introduce a few more CQs, I have the flexibility to build them into the PF question template all at once.

The crux of my original question -- about alternating the CQs -- goes to whether using the alternates would negatively impact the overall process.

Let me ask it another way:

If I review -- and get buy-in -- on six CQs, then why should it matter if I use only the three "originals" (i.e., the CQs used in the first series), or alternate one, two or even three other established CQs in the subsequent series?

Happy new year,
Dan


rnelson
Member
posted 01-02-2013 02:27 PM
Happy New Year everyone.

Of course, it is possible that those CQs "go flat" because the examinee is actually lying to the RQs.

The problem, as I see it, is that you are making deliberate choices to load the examination in favor of + scores, and doing so in a subjective manner.

I am not so sure it is common to adjust the CQs, although I understand that Backster has historically taught the adjustment of questions in between charts if the CQs or RQs were "defective." The problem with this is that the determination of "defective" requires a judgment to be made during the middle of the test, before all the data is collected. It is like scoring the test while conducting the test. Worse, it is like manipulating the scores while running the test.

Back in the early days of pioneer development in polygraph I think it would not have struck anyone as very odd or wrong to make these judgements or decisions. Back in the day there was less thought given to consistent procedural standards and things like normative data. In fact, I think it is likely that the success of early techniques like the R&I rested heavily on the skill of the examiner (in the absence of structural rules and normative data) to make strategic decisions about question repetition. Indeed, examiners got quite good at this, and it seems the value of the R&I technique today largely rests upon the clinical use of that kind of skill. As often occurs, assets in one context become deficits in another, and the practice of changing questions is inconsistent with a highly standardized polygraph approach that is more likely to satisfy things like Daubert requirements for replication and norm-referenced error estimates.

A test, in its basic form, is a simple matter of stimulus and response. Present the stimuli. Measure the response. Do that several times. Aggregate the data together. Compare the aggregated data to the norms and make a categorical decision/result/opinion based on predetermined requirement/tolerance for error (statistical significance).
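In rough code terms -- a minimal sketch, where the scores and cutscores are invented placeholders, not any published technique's values -- that whole model is just this:

```python
# Minimal sketch of the stimulus-response testing model described above.
# The scores and cutscores are invented placeholders, not published norms.

def decide(chart_scores, cutscore_ndi, cutscore_di):
    """Aggregate per-chart scores, then compare the total to fixed cutscores."""
    total = sum(chart_scores)      # aggregate the repeated measurements
    if total >= cutscore_ndi:      # meets the truthful threshold
        return "NDI"
    if total <= cutscore_di:       # meets the deceptive threshold
        return "DI"
    return "INC"                   # neither threshold met: inconclusive

# Example: three charts, scored only after ALL the data are collected.
print(decide([+3, +2, +4], cutscore_ndi=+6, cutscore_di=-6))  # -> NDI
```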

So, the practice of deciding, during the middle of a test, that an examinee is reacting too much or not enough is a very suspicious and unscientific practice - and it looks a lot like an examiner manipulating the test result.

Just exactly how do we know, until the test is complete and all the data are available to score, how much is too much reaction? Or how much reaction is not enough reaction?

Today we have normative data, and we know what to expect in terms of accuracy and error when we conduct the exams using well-studied question sequences consisting of a roughly equivalent number of RQs and CQs presented in a seemingly random (non-random) mixed order to the examinee. The basis for reaction is thought to be a combination of emotion, behavioral conditioning, and cognition regarding the test stimulus questions. A part of that cognition is called "novelty," which refers to noticing something new and different. Each RQ and each CQ is reviewed during the pretest and each is presented during the in-test data collection phase. Although the questions are all reviewed, there are undoubtedly novelty effects that may occur the first time each question is heard during the in-test phase.

Double the number of CQs and you could increase reactions due to novelty alone for the CQ, and not the RQs.

It is possible that the resulting change in scores could be accounted for by simply adjusting the norm-referenced decision cutscores to correspond to the desired alpha (error) tolerances that are typically set at .05.
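For illustration only -- the normative means and standard deviations below are invented, not real study values -- holding alpha at .05 while the truthful score distribution shifts means the cutscore has to move with the norms:

```python
# Hypothetical illustration of alpha-referenced cutscores. The normative
# means/SDs are invented; only the principle (cutscore follows norms) matters.
from scipy.stats import norm

alpha = 0.05  # tolerance for falsely calling a truthful examinee deceptive

old_mean, old_sd = 6.0, 5.0   # invented norms: truthful scores, standard CQs
new_mean, new_sd = 9.0, 6.0   # invented norms: same test with doubled CQs

# The DI cutscore is the lower-tail quantile of the truthful distribution,
# so that only 5% of truthful examinees would score below it.
for label, mean, sd in [("standard", old_mean, old_sd), ("doubled", new_mean, new_sd)]:
    print(label, round(norm.ppf(alpha, loc=mean, scale=sd), 1))

# The cutscore shifts when the norms shift; keeping the old integer cutscore
# with the new procedure would silently change the real error rate.
```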

Not studying the effect on normative data, and not adjusting the normative alpha cutscores amounts to potentially manipulating the test results.

This could work just fine, but really should be studied before anyone goes out and experiments on the public with new techniques.

Evidence-based practice would probably have us simply conduct the exam according to established procedures and then score the test after all the data are collected.

If we are working in the kind of situation for which we find it advantageous to make clinical or subjective judgements during the exam, then there are good techniques for that (R&I).

.02

r


------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)



Dan Mangan
Member
posted 01-02-2013 02:42 PM
Thanks, Ray.

I'm definitely not looking to front-load the process with plus scores. Rather, I'm looking at ways to nip inconclusive results in the bud.

Our consumers -- therapists, chiefs of police, attorneys, spouses, employers -- want results. They don't want to hear "Your guy came up inconclusive."

And the paroled skinner who just forked over $375 of his own money doesn't want to hear that he has to dig deeper and do it all again because his mandatory PCSOT test was inconclusive.

Let me quote myself:

quote:
It's not terribly rare to change CQs between charts if a subject is not productively reacting one way or the other.

Please note that I said if a subject is not productively reacting one way or the other.

If a subject is not productively reacting one way or the other, that chart is a zero.

As for scoring as one goes along, Backster indeed teaches "spot analysis" and on-the-fly CQ modification. Indeed, that's the whole rationale behind Backster's Reaction Guide. Ditto for Matte and his even more complex technique.

I know you're down on expertise vs. science, but after running a thousand or so tests (and several thousand separate charts), one gets a sense of what a non-productive chart looks like.

Heck, even the newbies do -- or they should.

Ask any Backster grad about suffering "Tri-Zone Hell" under Cleve's tutelage.

Hypothetical...

Say you review six CQs and run six charts, with all the CQs being presented the same number of times (3). Does the novelty issue become moot then? If not, at what point, in your opinion, would the novelty issue become moot?

Dan


rnelson
Member
posted 01-02-2013 08:16 PM
The simple answer is this. The difference is moot when we have normative data for the technique as conducted.

Are there differences in novelty between a technique with 2 RQs and 3 CQs x 3 charts vs. 2 RQs and 6 CQs x 6 charts? Are there differences in the normative data (average deceptive and truthful scores and standard deviations of those scores)? We don't know. Maybe. Maybe not. The only responsible way to answer this question is not with opinion alone but by studying the question. Until then the most responsible thing to do is to forthrightly say that we don't yet know, and to remember that opinions are really just un-studied hypotheses, and most hypotheses turn out to be incorrect when we actually study them.

For example: it was hypothesized that time-barred CQs made a big difference. Turned out they don't. It was hypothesized that PLCs and DLCs make a big difference. Utah studies in the meta-analysis show that normative data are actually no different for Utah PLC and Utah DLC.

If there is no difference then, of course, it makes no difference, and there is no need to wonder about the need for normative data for the new procedure. If there is a difference then we have to decide whether it is OK to use the new procedure without normative data, knowing that we are loading the results in some way.

Like this: my kid is going for an annual check-up. But he is kind of short and skinny, so I put him in platform shoes, make him wear three sweaters and keep rocks in his pockets so that his scores are more normal. Sounds wrong don't it? It is wrong. The point of testing is to find out what we've got, not to confuse or manipulate the scores.

The question is an ethical one just as much as a science and procedure question: do we want to experiment on the public? Or should we stick to proven methods?

Bottom line: the difference is moot when we study the technique and when we have normative data that allows us to calculate the level of statistical significance or probability of error represented by the numerical scores obtained using the new procedure. When we have normative data, and set alpha at .05, then whatever cutscore gives that alpha/error level is the one to use, regardless of what the numerical value of the cutscore actually is. To do otherwise is to make arbitrary, naive, and ill-informed judgments about the numerical integer cutscore instead of statistical judgments.

If we have normative data for a particular technique (and we do), and if we then lean on the scores or load them in one direction or another by altering the test procedure, and then try to apply the same numerical cutscores - without addressing the difference in normative data - then we are effectively destroying our ability to accurately and confidently answer scientific questions (Daubert type questions) about known error rates.

.02

r

------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)



Dan Mangan
Member
posted 01-02-2013 08:26 PM
Ray,

Are you suggesting that those trained in the Backster method abandon the rules taught to them in an APA-accredited school and dispense with the Backster Reaction Guide?

-and-

You said:

quote:
If we have normative data for a particular technique (and we do),...

The APA's meta-analytic survey involves 3,723 polygraph exams. Did the meta-analysis committee review the continuous, full-length video for each and every one of those polygraph exams?

If every exam's video wasn't reviewed, then clearly the vast majority of them were, right? Otherwise, you would be making wild assumptions -- something that runs counter to evidence-based practices.

Now, if the committee did not review any videos, then how do they know that some CQs were not modified -- or changed entirely -- during those exams?

Without verifying each exam individually by reviewing its video, how do you know your claimed "normative" data is indeed normative?

Without reviewing each exam's video, how do you know what you are talking about?

Dan



rnelson
Member
posted 01-03-2013 07:31 AM
quote:
Are you suggesting that those trained in the Backster method abandon the rules taught to them in an APA-accredited school and dispense with the Backster Reaction Guide?

Of course not.

I am prepared to say that we have some understanding of the normative scores for the Backster technique when conducted using the established procedures.

To paraphrase the words of the NRC (2003) report: the confidence intervals surrounding observed accuracy with the available study sample data are sufficient to rule out random variation or bias in study design as a possible explanation for the observed results, and no formal integrative hypothesis testing is required to assert this.

I understand about private-pay customers who don't like inconclusives. They feel like they are paying for answers not "I don't know."

BTW, I'm not down on expertise. I'm down on "expertizing" - in which we regard our expertise as a sufficient substitute for evidence - when we conveniently forget that a professional opinion is an opinion based on data, and that opinions not based on a replicable interpretation of data and evidence are actually personal opinions even if they come from an "expert."

There is no doubt that expertise matters. There is also no doubt that experts need accountability. Several scientific fields are presently struggling in the wake of language analysis that can quite easily determine plagiarism and authorship, and new statistical methods that can begin to tell if people have cooked their data to achieve their results.

------------------------------

Your video question is so ridiculous that we are at risk of losing contact with serious and useful discussion, and degrading into a session of mockery and attack. In which case, I will have other interesting things to do.

The real issue you raise, in asking the video question, is the one of representativeness. Research is always premised on certain assumptions and we should be able to state those out loud. For example: we would like to assume the samples are somehow representative. We do not assume that every exam was perfect. Perfection is impossible. If a perfect exam were needed, or possible, as representative of the population of all exams then we would not need large samples.

Instead we assume that there will be variation and that there will be imperfection. We also hope to be able to assume that the variation and imperfection in a large sample is representative of the imperfection and variation in the population of all exams.

We also calculated and showed statistically (try not to fear that word) whether the different samples were representative of each other. As it turned out, some samples were and some were not representative of each other for some examination techniques. The details are in the complete report. Our assumption (again with the assumptions, see) was that when samples are representative of each other there is a greater likelihood that they are representative of the population. This is because for the samples to represent or replicate each other will require three things: 1) examinees from the same population, 2) exams conducted using the same procedures, and 3) data scored and interpreted the same way. When the samples were not representative of each other, one of these conditions is different. We lose confidence in our assumption that they are representative of each other, and we are forced to wonder which, if any, or all, of the samples are not representative of the population.
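As a toy illustration of what "representative of each other" can mean -- the scores below are invented, not the meta-analysis data; the actual statistics are in the complete report:

```python
# Toy illustration: compare two study samples' scores to ask whether they
# could plausibly represent each other. Data are invented for the example.
from scipy.stats import ttest_ind

sample_a = [4, 6, 7, 5, 8, 6, 5, 7]   # invented scores, study A
sample_b = [5, 7, 6, 6, 9, 5, 7, 6]   # invented scores, study B

t_stat, p_value = ttest_ind(sample_a, sample_b)
if p_value > 0.05:
    print("No detectable difference; the samples may represent each other.")
else:
    print("Samples differ; wonder which, if any, represents the population.")
```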

.02

r

------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)



Dan Mangan
Member
posted 01-03-2013 10:13 AM
Ray,

quote:
Your video question is so ridiculous that we are at risk of losing contact with serious and useful discussion...

No Ray, it's a necessary question -- if you're serious about the "science," that is.

On the other hand, if the video question drops a cinderblock into the serene reflecting pool of the polygraph indu$try's self-penned validity narrative, then I can understand your displeasure.

Question: Can you conduct a credible QA review on a single exam without a careful analysis of the complete video?

Answer: No.

If you wouldn't give your blessing to a video-less QA for a single case, how is it that you can go out on a limb to the tune of 3,723 polygraph exams?

Without video review, you don't know if any given technique's protocols were faithfully executed.

Without video review, you don't know if a subject bled out (in between charts) on PLCQs, which then had to be modified, replaced or maybe even turned into DLCs.

Without video review, you don't know if the subject even understood the test questions.

Without video review, you don't know if external factors (noise, distractions, etc.) came to weigh against, or in favor of, the subject.

Without video review, you don't know if an examiner juiced certain questions with inflection, or other methods.

Without video review, you don't know if there were visible signs of CMs present in any given test.

Without video review, you don't know jack.

Bottom line: Without video review, your meta-analysis is a hypothetical pipe dream.

Or as ex-CIA examiner John Sullivan puts it, a scientific wild-ass guess. SWAG, for short.

Thank God there are judges -- and other well-informed, reasoned decision makers -- who see through this madness.

Dan



skipwebb
Member
posted 01-03-2013 11:57 AM
Dan,

We often go to a doctor because we feel ill or have a sore throat or cough. The doctor will normally take our blood pressure, pulse rate, and temperature, look into our ears and throat, and listen for chest sounds. The result of that examination is oftentimes inconclusive, and his expert opinion is that additional testing is required in an effort to determine the possible cause of our malaise.

Obviously the additional testing (blood work, throat cultures, etc.) is more costly than the initial examination. We are then faced with two choices. We can decide we don't want to spend the additional money and walk out, or we can invest in more testing in an effort to resolve the reason for the illness, which may or may not resolve the question that brought us to the doctor in the first place.

The same is true of mechanics and cars. If our car is "just not running quite right," the mechanic can put it on a diagnostic instrument, put it through the standard test series, and attempt to determine the reason for the problem. If the test shows no direct issues then his diagnosis is no opinion or inconclusive, and resolving it would require a further, more intense examination which would cost more money.

How is polygraph any different? Why would someone expect a polygraph test to answer every question on the initial test if that initial testing is inconclusive? Why would they be upset with the examiner or the procedure if the diagnosis requires more testing, and therefore more expense, to reach a conclusion?

Get your slow-running computer worked on, or your home heating and air conditioning system, and the results are often the same. You pay for the initial testing and analysis, and if the problem is not resolved, more definitive work is required to get to the answer and solve the problem.

I've never had a doctor, mechanic, computer geek or HVAC repairman apologize to me when they didn't get the answer on their initial testing. Why should someone expect something different from the polygraph?

In any of the examples listed above, however, I wouldn't want the "professional" to fudge the results in order to get a definitive answer and send me out the door happy but really cheated.

I say do the test the way it is designed, get the results, and if the results are inconclusive, then let me decide whether I want to invest in additional testing to resolve the issue.


Dan Mangan
Member
posted 01-03-2013 04:14 PM
Skip,

But we EXPECT to get hosed by mechanics, plumbers and computer technicians -- and get the runaround by the health care system.

Should polygraph be any different? I dunno. In the retail polygraph bidness, a pronouncement of inconclusive is often seen by the client as a convenient method to fleece people.

That's why my own policy is, and always has been, to offer a re-test at no charge.

I'm not looking to fudge any results. I simply asked if reviewing twice as many CQs as needed -- and having them pre-loaded into the PF question template for instant availability -- was problematic.

Ray speculates that it could bias the process toward a finding of NDI. Then, in the next breath, he says it all might be fine.

Ray's beef is with the novelty of the "new" question(s). But let's think this through...

When a subject bleeds to a CQ, we have to stop the flow. Don't the standby OTWWHD ("other than what we have discussed") and BWYATM ("besides what you already told me") Band-Aids add novelty to the previously "set" CQs?

And how many times, after applying a OTWWHD or BWYATM Band-Aid, has the inattentive subject given the "wrong" answer yet again?

In those cases, where a seemingly more obtuse subject gives the wrong answer, isn't it natural for an examiner to lean into that CQ preface just a tad next time around? BESIDES what you already told me....

Isn't there novelty attached to that as well?

Hence my interest in having alternate "pre-approved" CQs ready to go.

If one were to rigorously review six CQs in the pre-test -- and I mean beat the snot out of 'em -- why would novelty be that much of an issue?

If you run two charts and have a zero-sum product, chances are you are heading for an inconclusive result.

For decades, Backster has made clear the potential woes of a "defective" CQ. His spot analysis protocol provides a remedy for such occurrences -- modifying the defective questions.

So, by extension, I was simply dispensing with the interruption necessary to apply the Band-Aid, and instead going directly to another (already familiar) CQ.

Dan



Ted Todd
Member
posted 01-03-2013 07:35 PM
“Skip, But we EXPECT to get hosed by mechanics, plumbers and computer technicians -- and get the runaround by the health care system.”

“That's why my own policy is, and always has been, to offer a re-test at no charge.”

Dan,
I see two interesting comments here. First off, I don't go to any service provider expecting to get "hosed". That is why I do my homework and shop around. I also do not let the health care system take advantage of me. It sounds like you are an unhappy consumer who can't get answers from your own doctor?

Second, if you are doing any test at “no charge” shame on you. I know what my product is worth. Perhaps doing exams at no charge is what is making you so unhappy in this profession?

I took my truck into the dealership yesterday because the heater was working off and on. They could not find the problem but charged me $125.00 to look for it. They told me I also needed rear brakes, wiper blades, and an oil change and should consider two new tires. I got the wiper blades and oil change but I will get the tires and brakes from a buddy of mine who runs a Big O Tires store here in town.

You see Dan, I have a choice, and the choice was mine. I don’t think the dealership is going to offer to check my heater again at no charge. Why should you offer to take a second look at a problem at no charge?

Ted


Bill2E
Member
posted 01-03-2013 11:29 PM
Dan,

Is there anything you are in agreement with other than yourself?


Dan Mangan
Member
posted 01-04-2013 08:58 AM
Ted,

quote:
Why should you offer to take a second look at a problem at no charge?

There are two reasons:

1. I have a genuine professional interest in understanding the reason for the inconclusive result, and then getting resolution the next time around.

2. Sometimes clients have spent a lot of money -- such as an attorney representing someone facing heavy time -- and I feel it's good practice to offer another shot. In modern industry, this is called "customer relationship management" (CRM). Google CRM (or go to Wikipedia) to learn more.

By the way, my offer of a free re-test does not apply to LEPET and PCSOT cases.

--------------------------------------------

Bill2E:

quote:
Is there anything you are in agreement with other than yourself?

Such is life for an iconoclast who walks among the followers of a quasi-religious cult where different views are summarily condemned as heresy.

Dan



Ted Todd
Member
posted 01-04-2013 11:47 AM
Dan,
There is no doubt in my mind that you offer excellent customer service. So do I. I just don’t do it for free. You get what you pay for. Here are a couple of business terms you may want to Google: “Insolvency”, “Overhead”, “Negative Spending”, “How do I pay my bills?” and “Chapter 13”. If you ever get into the automotive repair business Dan, please let me know and I will be your first customer.

Ted


Dan Mangan
Member
posted 01-04-2013 11:53 AM
Ted, come to think of it, I'd be happy to take a look at your brakes. Bill's too.

But to your point about insolvency, overhead, Chapter 13, etc...

I don't have enough inconclusive exams for the economic impact of a free re-test now and then to even remotely matter.

The purpose of my original question in this thread was to simply explore ways to reduce inconclusive tests in general. Specifically, those instances when one has multiple LEPET or PCSOT exams in a single day.

Given that a lot of time is devoted to CQs in the pre-test, I simply broached the idea of taking a few extra minutes to have additional "pre-approved" CQs at the ready, in order to (potentially, at least) save time later in the process.

Dan


Ted Todd
Member
posted 01-04-2013 07:14 PM
Dan,

Thanks for your kind offer to take a look at my brakes. I will be sending my mother-in-law over in her car tomorrow. If you do a "good job" I will send my wife over in the very near future. Do you offer family discounts?

Ted


Dan Mangan
Member
posted 01-04-2013 07:46 PM
Trust me, my Backster brother, the old crone will be in the best of hands!


Ted Todd
Member
posted 01-04-2013 08:52 PM
Dan,
U DA MAN!
We will chat about the life insurance benefits in a later post!

Ted


Dan Mangan
Member
posted 01-04-2013 08:55 PM
No sweat, bro'. Ask me about DOUBLE INDEMNITY.


rnelson
Member
posted 01-05-2013 08:02 AM
Sorry I could not respond yesterday - was busy in court discussing the meta-analysis and a Backster You-Phase exam.

I suppose if you are completely sure that your proposed modification will actually improve the polygraph then it would be OK.

Thing is this: accuracy of the polygraph is sufficiently high that actually improving it is difficult. And, it is actually easier to damage the accuracy of the test by making un-proven modifications. There is simply a lot more room for accuracy to go down compared to what little room there is for accuracy to go up.

Without evidence of improvement we are stuck with nothing more than "trust me, I'm an expert." Now if you truly don't believe in reproducible results, and if you are truly enmeshed with the need to intuit the test result, then this may sound good. But it is losing ground.

If you are completely sure that your exams will never be QCd by an opposing counsel's expert, and if you are completely sure that your exams will never have to be discussed in court, then there will be no problem with making un-proven modifications to your procedures and techniques. No-one will ever question your expertise or your results.

If you are not completely sure your test will never be QCd or that your tests will never require discussion in court, then it might be best to stick to established procedure. I prefer to conduct every exam as if I will have to defend it (with more than just my expertise). It goes like this: if you conduct the exam in the same manner that was used in the published studies, then it is reasonable to expect your accuracy to be similar to that in the published studies.

Of course, there is a very small number of examiners whose personal accuracy seems to be verified at or near 100% - in publication. If you are one of those individuals, and if you can get people to believe the reported perfect/near-perfect published personal accuracy rates, then you may or may not have to rely on validated procedures to support arguments regarding test accuracy.

BTW, you are simply wrong about video. Yes, there are some things that we can learn from video, and video is valuable. But there is quite a lot we can learn about an exam even without the video. My guess is that some important polygraph programs have found they can actually QC exams to their level of satisfaction without video. We may not agree that it seems optimal, but it is not correct to arm-chair critique their decisions and mission priorities without all the information. Nothing good comes from that; only division and misunderstanding.

Mostly, the goal of research is not to achieve perfection but to find out what we've got in real life. If we assume that polygraph is administered imperfectly at times, then what we want to know is this: how accurate or robust is the polygraph under normal circumstances, with degrees of imperfection? So, it is quite alright to review research sample data for basic procedural compliance (eliminating major protocol violations), and proceed with the assumption that the sample data are representative of the kinds of minor errors that may actually occur in real life, and that the resulting accuracy might be indicative of the kind of accuracy to expect under those circumstances.

You forget that a meta-analysis is a study of other published studies. We didn't collect the sample data. Other researchers did that.

I wonder if you feel so strongly about the need for video QC of research data that you reviewed all the video for the Mangan et al 2008 study? Or perhaps if you did not then you might feel strongly enough to publish an advisement/addendum that the correct execution of the exams is unverified?

Most of your arguments rest on dramatic sound-bites that have little actual meaning in polygraph and science (zero-sum, serene reflecting pool, bled-out). They make an impression but accomplish nothing. The requirements you would impose would again have us assume we know absolutely nothing until we have the final answer.

The point of research is to try to learn, despite an imperfect context. We control what we can, state our assumptions out loud, and try to name the things we cannot control. That is why scientific results are always probability statements with corresponding confidence levels of their own.

Bottom line: there are no final answers. We learn a little bit. Then we learn a little more. And there is always more to learn. I will argue that we have learned some things through the meta-analysis. I understand that your position is that some examiners have perfect accuracy, and short of that we know nothing so "trust me I'm an expert."

Sounds like platform shoes and rocks in the pockets to me. Sounds like you are offering to load the test.

Why not just run the test according to the established procedure? It will be easier to defend the test if the need ever eventually arises.

.02

r

------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)



Dan Mangan
Member
posted 01-05-2013 11:12 AM
Ray,

Let's get real.

No video = no verification. Period.

We're not talking about a facile and expeditious method for intra-agency QC.

We're talking about a self-published survey that is being sold to the public -- both figuratively and literally -- as being reflective of scientific methodology.

Absent video review, the survey makes claims that are arguably unsubstantiated.

Clearly, the APA went out on a limb.

What happened to the empirical approach? It sounds like the committee went through a bunch of bankers boxes full of polygraph data provided by entities that said, essentially, "Trust us, it's cool."

And that's what you're hanging your hat on?

On the other hand, perhaps no one should be surprised, as the APA still endorses trickery as part of the stimulation "test."

--------------------------------------------

Re: novelty...

Why would a reviewed CQ that wasn't presented until the second or third chart be any more novel than a CQ that had the red-flag "other than" or "besides what" Band-Aid applied just moments before?

If using an alternate CQ -- remember, one that was already reviewed -- tilts the process that much, what does that say about the inherent robustness of the "test"?

Dan


dkrapohl
Member
posted 01-05-2013 06:43 PM
Dan:
For the record, no published study to my knowledge has ever involved monitoring video recordings of polygraph exams to ensure the testers followed the protocol, including yours. Probably true for test validators in any field. This is certainly an empty and unproductive argument against polygraph research. You'll have to dislike the APA meta-analysis for other reasons.

However, to your point about pretesting multiple CQs, this idea has been floating around for a number of years. On the face of it, it seems reasonable, Ray's (and my) reservations notwithstanding.

I would like to offer another approach, equally un-validated, but perhaps less objectionable. Had you considered monitoring your examinee's EDA during the pretest interview? EDA provides a really good index of the arousability of stimuli, and this capability might be useful in helping decide which CQs you do insert into your test.

Recently some colleagues and I conducted a very small field study looking at the possible use of EDA monitoring during the pretest interview in screening exams. Generally, the survey of examiner attitudes showed almost no advantages or disadvantages to EDA monitoring, except when it came to their selection of DLCs. The effect was not 100% for that one, but definitely significant and in the "beneficial" direction.

It does not seem reasonable to me that relying exclusively on examinee behavior, gestures, hesitations, word choices, or pre-printed lists of CQs will always give the examiner what s/he needs to develop effective CQs for a given examinee. I am with Ray on the point that we should not be changing the test while we're running the test, but perhaps using physiological signals in an unconventional way can help us avoid bad CQs before it's too late to do anything about them. I believe they can in certain circumstances. You might give it a try, but I recommend you don't let the examinee know you're doing it. Also, a final suggestion: turn off self-centering, compress the time window, ignore phasic responses and look for tonic shifts. I will have more to say about this at the APA seminar.
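As a rough sketch of that last suggestion -- the sampling rate, window length, and threshold below are invented placeholders, not the settings from our study:

```python
# Rough sketch of "ignore phasic responses, look for tonic shifts": a long
# moving average washes out short phasic peaks, then we flag sustained
# baseline movement. Sampling rate, window, and threshold are invented.
import numpy as np

def tonic_shift_indices(eda, fs=20, smooth_s=10.0, threshold=0.5):
    """Return raw-sample indices where the smoothed EDA drifts > threshold."""
    win = int(fs * smooth_s)
    tonic = np.convolve(eda, np.ones(win) / win, mode="valid")  # tonic level
    drift = tonic - tonic[0]               # movement relative to the start
    return np.where(np.abs(drift) > threshold)[0] + win - 1

# Example on synthetic data: a flat 2 uS baseline with a slow upward drift.
t = np.linspace(0, 60, 60 * 20)
eda = 2.0 + 0.02 * t + 0.1 * np.random.randn(t.size)
hits = tonic_shift_indices(eda)
print(hits[0] / 20 if hits.size else "no tonic shift")  # seconds into record
```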

Good luck.

Don


Dan Mangan
Member
posted 01-06-2013 09:32 AM
Don,

The CQ has always struck me as being (another) weak link in the "test".

I remember my eagerness as a Backster student in 2004, anxious to learn the secret of polygraph.

When we were taught CQT theory, it was quite a letdown. My thoughts were "Huh? That's it? You must be kidding me."

In all candor, I felt disappointed and somewhat shortchanged. Had I learned CQT theory on Day 1 of class I may well have walked.

It's my own fault for not doing due diligence. (BTW, that's why my web site says what it says these days -- and seldom does an educated person buy into the polygraph myth sufficiently to sit for the "test.")

To your inquiry about EDA monitoring in the pre-test...

I have serious doubts about surreptitiously capturing/observing usable signals without detection. But I'm all for trying to find a better way to select CQs. I have been for a while.

When I completed Backster, I was determined to find a better way to select CQs. I spent a lot of money on "lab rats" at $40 per session to sit for exams involving mock crimes. The exam included a lengthy CQ selection test, which I used in place of a stim/ACQ.

I wanted the CQ selection test to mimic a conventional series, so it started with a neutral, SR and SYM (modified to reflect the nature of the test).

What followed were seven CQs, followed by another neutral.

Back in those days I was into heavy analog mode. Of course, I had an LX4000, which I bought (and used) at Backster, but I loved the "purity" of analog -- and being released from the influence of PolyScore.

I used 15-second windows and asked a mix of CQs: the old standbys, exclusives, inclusives, situationals, etc.
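In spirit, the ranking logic was something like this toy sketch -- the scoring rule and the numbers are illustrative only, not my actual protocol:

```python
# Toy sketch of ranking candidate CQs by reaction strength: score each CQ by
# its EDA rise within its 15-second window, then sort. Values are invented.

def rank_cqs(windows):
    """windows maps a CQ label to the EDA samples from its 15-second window."""
    strength = {cq: max(w) - w[0] for cq, w in windows.items()}  # rise from onset
    return sorted(strength, key=strength.get, reverse=True)

windows = {
    "CQ1": [2.0, 2.1, 2.9, 2.7],   # invented EDA samples
    "CQ2": [2.0, 2.0, 2.1, 2.0],
    "CQ3": [2.0, 2.6, 3.4, 3.0],
}
print(rank_cqs(windows))  # strongest reactors first: ['CQ3', 'CQ1', 'CQ2']
```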

"My" approach -- I'm sure it's been tried by others -- worked. I remember dropping in at the Backster school while vacationing in Coronado and breathlessly telling Cleve all about my success.

In one fell swoop, he dismissed it as being unnecessary.

I was crestfallen, and felt that it must be me who's somehow failing on the old standbys.

My lab rats experiment ceased. Later, when I started testing skinners behind the walls, I occasionally used a variation of my CQ selection test, just to reshuffle the deck once in a while, so the subjects might be caught off guard.

Problem is, the CQ selection test can take a lot of time, which, ironically, is not on your side in a prison setting. The reason: Inmate movement is heavily restricted, and supervision (having a CO nearby, but out of the room) was limited. Thus the windows one has to conduct a polygraph are shorter than one would like. Some of the discussions around CQs can really take you down a rat hole, and that burns up a lot of time.

So, while I don't see myself doing any secret EDA pre-test monitoring, I'm all for finding a better way to avoid the CQ crap-shoot.

Dan


Barry C
Member
posted 01-06-2013 10:28 AM
Why don't you just do a study of CQs? Come up with a list of options and ask people their subjective opinions of level of concern. I considered doing just that with academy students but never got around to it. I'd be curious to know what types of questions people find bothersome.

You don't need video to do what the MA committee did. I've never heard of video being used in any field, and it's certainly not listed in any of the how-to articles on MA. The videos would give you more information to code, and with that you could better gauge what is and is not important. You've put the cart before the horse and made assumptions about those answers already, so your logic is flawed and your conclusions invalid.

If you wanted to look at videos, you wouldn't look at them all anyhow. You wouldn't gain much from doing so. All you need is a random sample of the data to investigate your concerns. Your real issue is what variables explain variance better. We do so well now, I'm not sure that you'd learn anything with all the extra work.

Of course, you wouldn't want just one person to review videos: you'd want several. That way, you could study the variance you'd see with multiple reviewers, and now you're on to a different project. From there, using your logic, we'd need to video what each reviewer did to make sure they did the QC correctly (a subjective standard?). Then, a video of those folks.... Now you have an infinite regression of reviewers and you quickly learn (maybe) that your solution is self-refuting. Unless, of course, you decide there comes a point at which enough is enough and you have the information you need to know enough but not everything. That's what the committee did.

In the end, it gets down to a question of what is the more reasonable approach. In research it always comes down to weighing costs and time against statistical confidence. It doesn't take long to get to the point at which more work yields close to zero gain.
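To put a number on that last point -- a toy calculation, assuming a simple yes/no coding decision reviewed at 95% confidence:

```python
# Toy calculation of diminishing returns: the margin of error on a sampled
# yes/no coding decision shrinks only with the square root of the sample size.
import math

p = 0.5    # worst-case proportion for a binary coding decision
z = 1.96   # 95% confidence

for n in (25, 100, 400, 1600):
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:5d}   margin of error = +/-{moe:.1%}")

# Each quadrupling of the review sample only halves the margin of error.
```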


skipwebb
Member
posted 01-07-2013 11:27 AM
Dan,
We're bombarded with research from all over the world on a daily basis telling us that taking fish oil is beneficial, or that Vitamin D should be taken, or that Prilosec does a better job for GERD than Zantac.

Were these thousands of participants who "allegedly" took the different drugs properly, or the medical personnel who "allegedly" properly administered the drugs and observed the differences, ever videotaped??


Dan Mangan
Member
posted 01-07-2013 11:45 AM
C'mon, Skip, that's apples and oranges.

Would it matter if the doctor or nurse helping to run the drug study screamed at the patient as they ingested the wonder med? No.

Would it matter if the doctor or nurse helping to run the drug study gave the wrong med? Yes.

How many times have we heard -- or said -- that the most important part of the polygraph equation is the examiner?

For all of its claimed "robustness," polygraph is an exceptionally fragile process.

Polygraph validity is a religion. Hell, the APA says so:

quote:
The American Polygraph Association (APA) believes that scientific evidence supports the validity of polygraph examinations that are conducted and interpreted in compliance with documented and validated procedure.

Note the use of the word believes.

The APA can't say, it knows or has verified the legitimacy of the "scientific evidence," because they didn't actually conduct a video review of the procedures -- which is certainly REQUIRED in any meaningful QA/QC review.

Rather, a bunch of like-minded followers gave the APA high priests a bunch of selected stuff and said, "Trust us, it's cool."

The APA took it all on faith -- because they want to BELIEVE.

Amen.


Bill2E
Member
posted 01-07-2013 12:43 PM
"Dan is the primary author of A Field Study on the Validity of the Quadri-Track Zone Comparison Technique, published in the September 2008 edition of Physiology & Behavior, a peer-reviewed journal."

Dan,

In your study, did you review every polygraph examination conducted and review every video tape of every polygraph used in the study?

Some of your information regarding polygraph is on target; other areas seem to be slanted to your personal opinion without the benefit of field studies or lab studies. I have noticed you are not impressed with Cleve Backster or his methodology, and that had you known what polygraph truly represented before spending money on the schooling, you would not have gone into polygraph. Why do you stay in the field now, knowing it is so problematic and totally in error in so many aspects?


rnelson
Member
posted 01-07-2013 01:28 PM
Bill,

Don't worry too much about convincing Dan. He will not allow that. It seems though that his perspective has changed a little over time, and that is OK. We learn more and make use of new knowledge. So, Dan will modify his position on things when his independent mind has had a chance to evaluate everything and reach his own conclusion.

I would not assume that Dan is not impressed with Cleve Backster. It's just not that simple.

Dan may have a polemic way of saying things - to the point of sometimes losing the accuracy of his message - but at least we get to hear it.

The value in all this argument is that Dan is both knowledgeable about the polygraph and has experience outside the polygraph profession. So he will be able to say things that we need to hear and think about regardless of whether we like it or agree. We will then have the opportunity to further clarify our own information and our own position on things.

We'll all learn something.

For example, having thought this through more fully, I am less concerned now about not having access to the video. No study and no sample is perfectly representative. I am confident that we made more than reasonable efforts to investigate and describe the representativeness of the study samples included in the meta-analysis.

People will try to repeat the hyperbole - for example: there is no scientific consensus... yadayadayada. My answer in court the other day was that there is actually a lot of consensus in the evidence about the expected accuracy of the polygraph. And the publication of the NRC (2003) report - and the fact that they did everything possible to criticize and disparage the polygraph and the profession, and in the end concluded it works - does begin to establish a scientific consensus.

Think about this: if Dan is putting as much energy in to marketing his bid'ness as teasing out detailed criticism of the profession, then he may end up to be quite successful in the end.

Probably time to put this one to rest and move on to the next brawl.

As to flipping CQs, I think we could rationalize that it could either help or hurt the test. That is why we should study it before experimenting on the public.

I like the idea of practice guinea pigs. Great idea to gain experience - and maybe even do a pilot study. $40 a shot can add up fast tho.

Happy New Year again everyone.

.02

r

------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)



Dan Mangan
Member
posted 01-08-2013 08:16 AM
Bill,

No, I didn't do any video review. (I blind scored 20% of the 140 cases.) But there are some key differences between my study and the APA's meta-analysis...

o The APA's meta-analysis is self-published. Physiology & Behavior didn't have to publish my study.
o I'm not representing my findings as being representative of polygraph accuracy at large.
o I'm not $elling my findings.

Regarding my feelings about Cleve Backster, you couldn't be more wrong. Having Cleve as a teacher (and post-grad mentor) was a great experience. I dare say that I'm still star struck.

That said, my main frustration with Backster -- and to an even greater extent Matte -- is their utter resistance to any changes whatsoever to their methods.

For example, I think that situational CQs are great, but neither Cleve nor Jim will budge. Same for their scoring thresholds.

I fail to understand why they are so singularly stuck in their ways.

What does fail to impress me, however, is the flimsy theory behind the CQT.

Why do I continue to do polygraph work despite my reservations? Fair question.

Most of my activity these days is offering reasoned and rational remedies to people who have been victimized by someone else's polygraph "test."

Usually, those remedies require that I administer my own examination, almost always a Matte Quadri-Track. In other cases, it's a simple matter of shedding light on the realities of polygraph "science" that the test-taker was not aware of.

Dan


skipwebb
Member
posted 01-08-2013 10:45 AM
Doug Williams says:

"Police polygraph expert, Doug Williams will get you properly prepared to pass your polygraph test. In fact, he is the only one who can get you properly prepared because he is the only one with authentic credentials, a technique that is tested and proven to be effective, and the demonstrated ability to teach you how to pass a polygraph."

Dan says:

"Most of my activity these days is offering reasoned and rational remedies to people who have been victimized by someone else's polygraph "test."

Usually, those remedies require that I administer my own examination, almost always a Matte Quadri-Track. In other cases, it's a simple matter of shedding light on the realities of polygraph "science" that the test-taker was not aware of."

Wow! They sound very much alike. Both are the consummate expert, both have the only technique that works, and both help "victims" of everyone else's failures or shortcomings.

Sounds like a great setup for a business merger!


Dan Mangan
Member
posted 01-08-2013 11:50 AM
Skip,

Actually, it's a simple matter of supply and demand.

Consider:

>>>The cops, generally speaking, cast a wide net during their polygraph "test" in order to prime the big bad DECEPTION INDICATED Polyscore result banner -- in blood red, no less -- from which springs their electronic rubber hose interrogation.

>>>The busiest private examiner in New England -- outside of PCSOT applications -- typically runs a 4-RQ Backster exploratory, from which he makes split calls.

You can imagine the level of customer satisfaction with that approach.

>>>Some persons (or couples) take a cautious, measured step towards a polygraph "test," and want an in-person consultation during which they learn about the risks, realities and limitations of the polygraph.

In each of these cases, the victims (or prospective clients) need someone to turn to.

I simply provide them with a rational alternative. Why is that bad?

Dan


skipwebb
Member
posted 01-08-2013 03:15 PM
I'm a skeptic the minute someone starts a discussion with a statement such as "The cops generally speaking.....".

That's a very large basket to carelessly throw people into. I can't speak for "cops, generally speaking..." but the last thing my organization wants to do is run a false positive by "casting a wide net". In fact, one of the primary considerations we must address when we submit a request to conduct a polygraph test is whether there is a sufficient dichotomy upon which to form valid questions that the examinee can pass if he is not the "bad guy". We are prohibited from screening a bunch of folks who "might" have had access. In fact, we must run the primary suspect first and have those results in hand before we can even ask to do a second suspect in an investigation.

I'm aware that "some cops" might use the polygraph as an interrogational prop for the same reason "some cops" use the CVSA or routine consent searches for the same reason. I don't condone it. I'm just aware that it sometimes happens.

I'm also aware that attorneys often have their clients take a civilian administered test prior to sitting for one of our tests and I certainly don't have any issues with that.

Believe it or not, in 28 years as an examiner and over 40 years as an agent, I've met many hundreds if not thousands of very professional, conscientious polygraph professionals and a handful of charlatans. I could make the same statement about doctors, professors and attorneys with whom I have come into contact.

My concern stems from what I perceive as a distinct negative bias on your part towards any polygraph examination conducted by anyone other than you and your insinuation that examinees need your protection from everyone else.

I’m also concerned that your zeal to “explain the polygraph” might just morph into a Doug Williams-like rant that could actually prevent an examinee from sitting for and passing a polygraph that would have cleared them of the alleged wrongdoing.


Dan Mangan
Member
posted 01-09-2013 10:43 AM
Skip,

The state police polygraph operators in my general area (adjacent state, actually) use screening questions when they should be using diagnostic questions. Why?

I think we know why.

quote:
My concern stems from what I perceive as a distinct negative bias on your part towards any polygraph examination conducted by anyone other than you and your insinuation that examinees need your protection from everyone else.

I’m also concerned that your zeal to “explain the polygraph” might just morph into a Doug Williams-like rant that could actually prevent an examinee from sitting for and passing a polygraph that would have cleared them of the alleged wrongdoing.


I'm glad you're concerned, Skip. I'm concerned too.

I'm concerned when a polygraph "expert," lecturing to college kids, says "The polygraph knows when you lie."

I'm concerned when another polygraph expert, lecturing to future lawyers -- and faculty -- at a law school, runs what appears to be a trick stim "test."

I'm concerned that APA leadership openly endorses trickery.

I'm concerned that trickery is taught at APA seminars.

I'm concerned that the individual who controls the official worldwide polygraph narrative, chiefly defined by the APA's two house organs, says polygraph is as accurate as film mammography. That comparison is ludicrous.

I'm concerned that polygraph apologists insist the "test" is robust, when the reality is it's as fragile as can be.

I'm concerned that the process is so fragile that holding off on asking a previously reviewed CQ until after the first chart will somehow skew the "test."

I'm concerned that the process is so fragile that reducing things to writing for a statement test -- as pioneered by Stan Abrams in cases of understandably provocative questions -- is regarded as "bad polygraph."

I'm concerned that the LEPET "test" -- a multi-dimensional, multi-level, multi-facet, multiple-times-removed statement test of its own -- is regarded as "good polygraph."

I'm concerned that there are no independent studies attesting to the scientific validity of PCSOT or LEPET "tests," or even incident-specific polygraphs.

I'm concerned that using a PCSOT-style model for the domestic violence treatment triangle is rapidly gaining traction as the next ca$h cow for the polygraph "testing" indu$try.

I'm concerned that the polygraph "test" is inherently biased against honest subjects.

I'm concerned that the feds don't video record their polygraph "tests."

I'm concerned when I hear about the methods the FBI used in Higazy's polygraph "test."

I'm concerned about the secrecy surrounding federal polygraph programs.

I'm concerned when I read that some federal examiners (or contractors) have reservations about the questionable methods used in their polygraph programs.

I'm concerned that examiner bias is such an important factor that the APA runs lectures to help keep its own in check.

I'm concerned that the APA is selling (literally) a half-baked "meta-analytic survey" that is devoid of any independent QC, and is representing its beliefs as "science."

I'm concerned that anyone who vigorously challenges APA groupthink is labeled a troll, bigot, anti-polygraph polygraph examiner or charlatan.

I'm concerned that the APA, for all of its claims, bravado and bluster, is sh*t-scared of accepting the A-P countermeasure challenge, or even doing something along those lines in-house.

I'm concerned that the American Psychological Association says polygraph "testing" is hokum.

I'm concerned that the American Medical Association says polygraph "testing" is hokum.

I'm concerned that the Supreme Court of the United States, in a majority opinion, says polygraph testing is unreliable.

I'm concerned that the only people who believe in polygraph are those with a vested interest in polygraph.

I'm concerned that polygraph apologists are all for "practical polygraph," but circle the wagons when CVSA makes advances.

I'm concerned that two different polygraph scoring algorithms, using the same data, give two different results.

I'm concerned that polygraph operators, generally speaking, discourage prospective test-takers from researching the "test" prior to taking it.

To be continued.

I have to do final prep for another "collateral damage remedy" exam at noon.

Dan

[This message has been edited by Dan Mangan (edited 01-09-2013).]

IP: Logged

dkrapohl
Member
posted 01-09-2013 12:57 PM     Click Here to See the Profile for dkrapohl   Click Here to Email dkrapohl     Edit/Delete Message
Dan:

I am frequently slow on the uptake, so bear with me. Did I understand your post to say you had to stop adding to your list against the polygraph to go run a polygraph?

Don

IP: Logged

Ted Todd
Member
posted 01-09-2013 09:05 PM     Click Here to See the Profile for Ted Todd     Edit/Delete Message
"ZZZZZZZZZZZZZZZZZZZZZZZZZZ" And some more: "ZZZZZZZZZZZZZZZZZZZZZZZZ"

"I'm concerned the the Supreme Court of the United States, in a majority opinion, says polygraph testing is unreliable".

Dan, check out US v. Cunningham at the US Supreme Court. That was one of mine. Read the opinion on polygraph that was part of the decision.
We will all continue to do great work every day, with or without you.

Ted

[This message has been edited by Ted Todd (edited 01-09-2013).]

IP: Logged

Dan Mangan
Member
posted 01-10-2013 10:45 AM     Click Here to See the Profile for Dan Mangan     Edit/Delete Message
Loyal viewers,

Please note how Don shifts the focus from polygraph's glaring faults and abundant vulnerabilities and instead places it on me.

It's an old, but effective, tactic. Don's post reminds me of Joe Biden's performance in his debate with Paul Ryan.

This is the same method the liberal media and the gun-control lobby are currently using with the Sandy Hook Elementary School massacre.

The anti-gun crowd (aided by their puppets in the media) focuses not on Adam Lanza's mental health, his bizarre lifestyle, his worship of Satan, or the psychotropic medications he was taking, but on the convenient whipping boy: the inanimate gun.

Don uses the same tactic. Shameful.

But hey, I can play the same game:

Donald J. Krapohl, a career federal polygraph program bureaucrat, polygraph evangelist and editor-in-chief of the American Polygraph Association's self-promoting house organs, is arguably the Joseph Goebbels of lie detection whoredom.

As one of polygraph's most ardent apologists, Krapohl is known for his zealous claims of polygraph's scientific validity, his years of devotion to modernizing and expanding polygraph propaganda, and for his ruthless treatment of vocal polygraph realists who do not toe the party line.

See how easy that is?

OK, back to our regular programming...

Now, given that Don claims to be a little slow on the uptake now and again, I'll provide the "short yellow school bus" explanation...

Don,

Please note in my previous post that I said I had to prep for a collateral damage remedy polygraph. That means when a person is a victim of a false or inconclusive result from a polygraph "test," a remedy is usually desired. I provide that remedy. The false and inconclusive results comprise the "collateral damage." It's that simple.

Ted, mea culpa. I should have said:

I'm concerned that the Supreme Court of the United States' majority opinion in the Scheffer case states: "there is simply no way to know in a particular case whether a polygraph examiner's conclusion is accurate, because certain doubts and uncertainties plague even the best polygraph exams."

In all candor, Ted, I have no doubt that you and many, many others in the field have done -- and will continue to do -- great work. However, a substantial body of the great work has nothing to do with science, and everything to do with art or expertise.

I am not down on polygraph, but I am down on it being $old by high-profile polygraph pimps, hawkers and apologists as being scientifically vetted on the wholesale level.

Dan


[This message has been edited by Dan Mangan (edited 01-10-2013).]

IP: Logged

Bill2E
Member
posted 01-10-2013 01:10 PM     Click Here to See the Profile for Bill2E   Click Here to Email Bill2E     Edit/Delete Message
"I am not down on polygraph, but I am down on it being $old by high-profile polygraph pimps, hawkers and apologists as being scientifically vetted on the wholesale level."

So it's OK for you to sell polygraph as unscientific and "open book," but it is not OK for others to market it as scientific. It is wrong to use established methods that have worked rather well for years, because your ethics deem them totally unacceptable. And my ethics are in question? You are one warped puppy, Dan.

IP: Logged

Dan Mangan
Member
posted 01-10-2013 02:37 PM     Click Here to See the Profile for Dan Mangan     Edit/Delete Message
Hold on, Bill.

Do you remember Wendy's famous "Where's the beef?" commercial from the 1980s?

Take a look back: http://www.youtube.com/watch?v=Ug75diEyiA0

The same basic question applies to our discussion.

Where are the independent, objective, controlled and verified BLIND scientific studies that prove polygraph "works" IN THE FIELD at the accuracy levels the indu$try claims?

Where's the beef, Bill? Where?

IP: Logged

rnelson
Member
posted 01-10-2013 02:47 PM     Click Here to See the Profile for rnelson   Click Here to Email rnelson     Edit/Delete Message
Dan:
quote:
...

scientifically vetted on the wholesale level


First, let me state again that I do not believe that Dan is the enemy, but it's better to know your enemy and the perceptions he may hold. Dan has an odd way of saying these things and sticking them in our face.

We can limit our response to simple annoyance, or we can learn from it.

Second, there is NO science czar who will ultimately rubber-stamp us as "vetted on the wholesale level." If you endorse this language and its false metaphorical implications, then you will have allowed Dan and others to paint us into conditions that are impossible to satisfy. Which may be what he wants, given his special fondness for the "anti-science-but-trust-me" model of professional polygraph.

There is no person or committee who has ultimate authority in science. Science is a consensus based on the convergence of evidence. But our critics want us to feel sheepish and insecure because we've not been rubber-stamped.

Is there evidence? Sure. Is it perfect? Nothing is. Is it useless? Not at all. Is there more to learn? Always. Does the evidence converge to support a conclusion? Some does; some does not. That is always the case. The nice thing is that we can evaluate the evidence that converges and the evidence that does not, and reach intelligent conclusions about which evidence is more likely to be incorrect. BTW - my opinion is that the incorrect evidence rests mostly in the studies that report ~100% accuracy.
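
Here is a minimal back-of-the-envelope sketch of why that opinion is reasonable. The 90% accuracy figure and the sample sizes below are hypothetical, picked only for illustration, not taken from any study:

def prob_all_correct(n: int, p: float) -> float:
    """P(X = n) for X ~ Binomial(n, p): the chance that all n calls are correct."""
    return p ** n

for n in (20, 50, 100):
    print(f"n = {n:3d}: P(perfect record | true accuracy 90%) = {prob_all_correct(n, 0.90):.2e}")
# n = 20 gives ~1.2e-01; n = 100 gives ~2.7e-05. A perfect record over a
# sizable sample is far more likely to reflect the study's sampling or
# scoring criteria than a truly ~100% accurate test.

The exact numbers don't matter; the direction does: the closer a reported record is to perfect, the harder it is to reconcile with any plausible underlying accuracy.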

In the end we have to earn the consensus of science and we have to earn the respect of other professions. Can we do that? Of course. We have already started. But it is slow and takes time. Are we done? Not hardly. Will we ever be done? There is always more to learn - we'll be out of the game when we quit learning. Is the goal to earn respect or consensus? No. The goal is to do good work, and to survive (part of survival involves economic$ - no shame in that).

One thing is certain: if we do not bother to try to account for ourselves - or if we are simply too arrogant to care - then we will get nowhere, and we risk ceasing to exist in any relevant way.

So.......................

quote:
...pimps, hawkers and apologists...

Are these accurate criticisms?

How can we tell if they are or are not accurate?

Pimps? A metaphor implying that we are interested in nothing but money and narcissistic self-aggrandizement even if it involves the exploitation of others.

I do not agree that I am only interested in money and self-aggrandizement. And I do not agree that I am interested in the exploitation of others. I have done a great deal of unpaid work on behalf of the polygraph profession, and it is highly likely that others have made more money than I have based on that work. OSS-3 was a few thousand hours of unpaid work to develop and validate. I have never been paid for providing training on OSS-3. Programmers put in hundreds of paid work hours.

The meta-analysis was hundreds of hours of unpaid work. I have never been paid for providing training in that area either, though my travel and lodging expenses have been covered - otherwise it would be an out-of-pocket expense.

Hawker: a slightly less derogatory term. It implies that we are interested in nothing more than selling something, and will engage in aggressive tactics to do so. I invite anyone to look at the evidence and decide for themselves whether our work has been for the benefit of the profession or simply for the sake of selling something. I suppose you could characterize benefiting the profession as selling something - but I think if you look carefully, we have been as tough on ourselves, within the profession, as outsiders might be.

Apologists: a word that comes from the early history of the Christian church, whose writers addressed the criticisms of Rome regarding the "new" religion. In science, the word refers to researchers who start with the conclusions they wish to support (e.g., that the xyz polygraph technique has near-perfect accuracy) and proceed to develop evidence as proof - as opposed to starting with a question/hypothesis and being willing to prove it wrong and discuss the problems.

There are some practices for which we have shown supporting evidence, and some favored practices for which we have shown the evidence offers no support. Read the meta-analysis carefully and you'll see what we described the evidence as saying about some favored practices. To our profession's credit, most people tolerated this rather well.

But there seem to be a small number who are dismayed by the call-it-like-it-is approach: naming data that cannot be reconciled, important research questions that have simply been ignored, old psychological hypotheses so badly incomplete they must be regarded as false, physiological discourse that is inconsistent with discourse in physiology, decision models that lack scientific foundation, research methods that have sometimes been quite flawed, and the tendency for polygraph accuracy to be over-estimated as a result of pervasive misunderstanding about research sample construction (confession criteria and field sampling). While supporting the validity of the polygraph, we have in fact incurred the ire of some by describing these problems in writing that becomes publicly visible. How is that selling anything - except to create the impression of a profession that is much better prepared to account for itself than ever before? It is, in fact, the only correct way to sell and market the polygraph profession.
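
That last point about sample construction is easy to demonstrate with a toy simulation. The hit rate and confession rate below are hypothetical, chosen only to show the direction of the bias; nothing here is taken from the meta-analysis or any actual study:

import random

random.seed(7)
TRUE_HIT_RATE = 0.85   # hypothetical: chance a chart correctly reads "deceptive"
P_CONFESS = 0.60       # hypothetical: chance a failed examinee then confesses

total = hits = confirmed = confirmed_hits = 0
for _ in range(100_000):            # every simulated examinee is deceptive
    scored_deceptive = random.random() < TRUE_HIT_RATE
    total += 1
    hits += scored_deceptive
    # Confirmation happens only via post-test confession, and a confession
    # only follows a failed chart. False negatives pass the test, are never
    # interrogated, never confess, and never enter the "confirmed" sample.
    if scored_deceptive and random.random() < P_CONFESS:
        confirmed += 1
        confirmed_hits += 1

print(f"true hit rate, all deceptive examinees:   {hits / total:.1%}")
print(f"apparent hit rate, confirmed sample only: {confirmed_hits / confirmed:.1%}")

This prints roughly 85% against exactly 100%: the selection rule, not the test, produces the inflated figure.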

I understand that some are very comfortable with the "trust-me-I'm-an-expert" mode of polygraph, and are opposed to the idea of accountability and science.

I think the important thing to consider is this: Is the “polygraph-ain't-science-but-trust-me-I'm-an-expert” model of polygraph something you feel so strongly about that you would have the profession put it in writing and take it into legislative and program-funding discussions? Or courtroom discussions?

The old hyperbole - "the test is only as good as the examiner" or "the accuracy of the test depends on the examiner" - is an old-fashioned dodge used to sell confidence and avoid answering the real question:

The real question is this: assuming that there is some structured procedure, and assuming a normal degree of competence and protocol error in the test administration, how accurate is the test then? Or, how accurate are most exams from most examiners?

“Test-accuracy-depends-on-the-examiner” is exactly the kind of answer that reeks of apologetic nonsense and screams that we are either unprepared or unable to actually answer the test-accuracy question.

There is obviously a little more to it, but the real question is definitely not this: "how accurate are perfect exams from the most experienced expert examiners on earth, with perfect protocol compliance and perfectly normal examinees?" That is not how real work happens.

We know the answer to that question from the studies on some of the boutique proprietary techniques: ~100% (perfect or near perfect).

What is surprising is that our profession was so desperate for confidence and so under-aware of the real issues that we actually considered endorsing or embracing those reported results.

Do we really think that reporting ~100% accuracy will help the profession? No.

Do we really think that ~100% accuracy rates are reliable and reproducible by most examiners in most neighborhoods, with most examinees? No. Even Dan says "no" to this.

How do people achieve results like ~100% accuracy if they are not reproducible, or if we cannot expect this level of accuracy in reality? Researchers seem to be motivated to find and report certain results, and are not free to find or describe problems.

Isn't that like... apologetics? Yep.

Why would someone report ~100% accuracy results if they are not reproducible or if we cannot expect ~100% accuracy in reality? Selling something. Selling confidence. Selling training. Selling books. Selling their unique skills/services.

Selling? Like hawking? Yes.

Is it wrong to sell this? Probably not wrong in the legal sense.

Are there problems with publishing and selling ~100% accuracy claims? Yes.

It fosters misguided expectations, makes the profession look silly, is exploitative of people's desire for confidence, and is exploitative of people's naivete.

Is publishing ~100% accuracy exploitative? Like pimping? Yes.

It is exploitative of publishers. It is exploitative of the public who may read it. And it is exploitative of the profession that may be tempted to be influenced by it. Of course, if the polygraph technique is truly ~100%, and if we can expect this kind of accuracy from most examiners under most circumstances then it is definitely not exploitative.

So, Pimps and Hawkers and Apologists...

Oh my.

Please don't be afraid of science. It's not that scary and not that difficult, and it will be more effective for our profession to account for ourselves than to emphasize an anti-science "trust-me-I'm-an-expert" model in which validity has nothing to do with the test data and is reduced to little more than a beauty contest of expert resumes.

In the beauty-contest/expert-practice model there is little room for competent examiners to succeed unless they have been anointed by a guru, have successfully proclaimed themselves gurus, or have convinced others that they have achieved guru stature.

That is not going to build a healthy and successful future for the profession. It will be better to account for ourselves and to emphasize teachable, learnable, reproducible polygraph techniques (which, of course, still require aptitude, in the same way that doctoring and engineering require certain aptitudes). Then we know what to expect in terms of test accuracy from competent professionals, and anyone who does excellent work has a chance at a successful professional career.

Eventually, when prompted to answer critical questions in court or during funding decisions, we will answer by offering either some form of scrutable evidence or an inscrutable attitude of "polygraph-ain't-science-but-trust-me-I'm-an-expert." One of these is pimping, hawking, and apologetics; the other is not.

.02

r

------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)


IP: Logged
